An Exact Formula for the Average Run Length to False Alarm of the Generalized Shiryaev-Roberts Procedure for Change-Point Detection under Exponential Observations
We derive analytically an exact closed-form formula for the standard minimax
Average Run Length (ARL) to false alarm delivered by the Generalized
Shiryaev-Roberts (GSR) change-point detection procedure devised to detect a
shift in the baseline mean of a sequence of independent exponentially
distributed observations. Specifically, the formula is found through direct
solution of the respective integral (renewal) equation, and is a general result
in that the GSR procedure's headstart is not restricted to a bounded range, nor
is there a "ceiling" value for the detection threshold. Apart from the
theoretical significance (in change-point detection, exact closed-form
performance formulae are typically either difficult or impossible to get,
especially for the GSR procedure), the obtained formula is also useful to a
practitioner: in cases of practical interest, the formula is a function linear
in both the detection threshold and the headstart, and, therefore, the ARL to
false alarm of the GSR procedure can be easily computed.

Comment: 9 pages; Accepted for publication in Proceedings of the 12th German-Polish Workshop on Stochastic Models, Statistics and Their Application
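The GSR procedure itself admits a simple recursive implementation: starting from a headstart R_0 = r >= 0, update R_n = (1 + R_{n-1}) * LR(x_n) and raise an alarm the first time R_n crosses the detection threshold A. The sketch below illustrates this recursion for exponential observations with a shift in rate; the specific rates, threshold, and function name are illustrative assumptions, not quantities taken from the paper.

```python
import math

def gsr_stopping_time(xs, threshold, headstart=0.0, pre_rate=1.0, post_rate=0.5):
    """Illustrative Generalized Shiryaev-Roberts run on a sequence xs.

    Pre-change model: Exp(pre_rate); post-change model: Exp(post_rate)
    (rates chosen here only for illustration). Returns the first index n
    (1-based) at which R_n >= threshold, or None if no alarm is raised.
    """
    r = headstart  # GSR headstart R_0 = r >= 0
    for n, x in enumerate(xs, start=1):
        # Likelihood ratio of one exponential observation:
        # f_post(x) / f_pre(x) = (post_rate/pre_rate) * exp((pre_rate - post_rate) * x)
        lr = (post_rate / pre_rate) * math.exp((pre_rate - post_rate) * x)
        r = (1.0 + r) * lr  # GSR recursion R_n = (1 + R_{n-1}) * LR_n
        if r >= threshold:
            return n
    return None
```

A large observation (likely under the post-change, smaller-rate model) drives R_n up quickly, while small observations keep it below the threshold; the ARL to false alarm studied in the paper is the expected alarm time when no change ever occurs.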
Asymptotically optimal pointwise and minimax quickest change-point detection for dependent data
We consider the quickest change-point detection problem in pointwise and minimax settings for general dependent data models. Two new classes of sequential detection procedures associated with the maximal "local" probability of a false alarm within a period of some fixed length are introduced. For these classes of detection procedures, we consider two popular risks: the expected positive part of the delay to detection and the conditional delay to detection. Under very general conditions for the observations, we show that the popular Shiryaev-Roberts procedure is asymptotically optimal, as the local probability of false alarm goes to zero, with respect to both these risks pointwise (uniformly for every possible point of change) and in the minimax sense (with respect to maximal over point of change expected detection delays). The conditions are formulated in terms of the rate of convergence in the strong law of large numbers for the log-likelihood ratios between the "change" and "no-change" hypotheses, specifically as a uniform complete convergence of the normalized log-likelihood ratio to a positive and finite number. We also develop tools and a set of sufficient conditions for verification of the uniform complete convergence for a large class of Markov processes. These tools are based on concentration inequalities for functions of Markov processes and the Meyn-Tweedie geometric ergodic theory. Finally, we check these sufficient conditions for a number of challenging examples (time series) frequently arising in applications, such as autoregression, autoregressive GARCH, etc.
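For dependent data, the Shiryaev-Roberts statistic is driven by the per-observation log-likelihood ratios between the "change" and "no-change" models. As a minimal sketch, assuming an AR(1) series with standard Gaussian innovations and a post-change mean shift mu in the innovations (the coefficient rho, shift mu, and function name are illustrative assumptions, not from the paper):

```python
import math

def sr_statistic_ar1(xs, rho=0.5, mu=1.0, headstart=0.0):
    """Illustrative Shiryaev-Roberts statistic for an AR(1) time series.

    No-change model: x_n = rho * x_{n-1} + e_n with e_n ~ N(0, 1);
    change model shifts the innovation mean to mu. Returns the list
    of R_n values; an alarm would be raised when R_n crosses a threshold.
    """
    r = headstart
    prev = 0.0  # assumed initial condition x_0 = 0
    stats = []
    for x in xs:
        e = x - rho * prev                 # innovation under the no-change model
        llr = mu * e - 0.5 * mu * mu       # log-LR of N(mu,1) vs N(0,1) at e
        r = (1.0 + r) * math.exp(llr)      # SR recursion R_n = (1 + R_{n-1}) * LR_n
        stats.append(r)
        prev = x
    return stats
```

The abstract's sufficient conditions amount to requiring that the cumulative sum of such log-likelihood ratios grows linearly (uniform complete convergence of its normalized version), which is what makes this recursion asymptotically optimal under both risks.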